AI Bill


Advancing Data Equity: Practitioner Responsibility and Accountability in NLP Data Practices

Cunningham, Jay L., Shao, Kevin Zhongyang, Pang, Rock Yuren, Mengist, Nathaniel

arXiv.org Artificial Intelligence

While research has focused on surfacing and auditing algorithmic bias to ensure equitable AI development, less is known about how NLP practitioners - those directly involved in dataset development, annotation, and deployment - perceive and navigate issues of NLP data equity. This study is among the first to center practitioners' perspectives, linking their experiences to a multi-scalar AI governance framework and advancing participatory recommendations that bridge technical, policy, and community domains. Drawing on a 2024 questionnaire and focus group, we examine how U.S.-based NLP data practitioners conceptualize fairness, contend with organizational and systemic constraints, and engage emerging governance efforts such as the U.S. AI Bill of Rights. Findings reveal persistent tensions between commercial objectives and equity commitments, alongside calls for more participatory and accountable data workflows. We critically engage debates on data diversity and diversity washing, arguing that improving NLP equity requires structural governance reforms that support practitioner agency and community consent.


UK ministers delay AI regulation amid plans for more 'comprehensive' bill

The Guardian

The bill will not be ready before the next king's speech, which is likely to trigger concerns about delays to regulating the technology. The date for the next king's speech has not been set, but several sources said it could take place in May 2026. Labour had originally planned to introduce a short, narrowly drafted AI bill within months of entering office, focused on large language models such as ChatGPT. The legislation would have required companies to hand over their models for testing by the UK's AI Security Institute. It was intended to address concerns that AI models could become so advanced that they posed a risk to humanity.


UK delays plans to regulate AI as ministers seek to align with Trump administration

The Guardian

Ministers have delayed plans to regulate artificial intelligence as the UK government seeks to align itself with Donald Trump's administration on the technology, the Guardian has learned. A long-awaited AI bill, which ministers had originally intended to publish before Christmas, is not expected to appear in parliament before the summer, according to three Labour sources briefed on the plans. Ministers had intended to publish a short bill within months of entering office that would have required companies to hand over large AI models such as ChatGPT for testing by the UK's AI Security Institute. Trump's election has led to a rethink, however. A senior Labour source said the bill was "properly in the background" and that there were still "no hard proposals in terms of what the legislation looks like".


The Download: Congress's AI bills, and Snap's new AR spectacles

MIT Technology Review

More than 120 bills related to regulating artificial intelligence are currently floating around the US Congress. This flood of bills is indicative of the desperation Congress feels to keep up with the rapid pace of technological improvements. Because of the way Congress works, the majority of these bills will never make it into law. But simply taking a look at them all can give us insight into policymakers' current preoccupations: where they think the dangers are, what each party is focusing on, and more broadly, what vision the US is pursuing when it comes to AI and how it should be regulated. That's why, with help from the Brennan Center for Justice, we've created a tracker with all the AI bills circulating in various committees in Congress right now, to see if there's anything we can learn from this legislative smorgasbord.

Here's what I made of Snap's new augmented-reality Spectacles: Snap has announced a new version of its Spectacles, AR glasses that could finally deliver on the promises that devices like Magic Leap, HoloLens, or even Google Glass made many years ago.


There are more than 120 AI bills in Congress right now

MIT Technology Review

A bill typically needs to pass a committee, a smaller body of Congress, before it is voted on by the full chamber. Many will fall short at this stage, while others will simply be introduced and then never spoken of again. This happens because there are so many bills presented in each session, and not all of them are given equal consideration. If the leaders of a party don't feel a bill from one of their members can pass, they may not even try to push it forward. And then, depending on the makeup of Congress, a bill's sponsor usually needs to win over some members of the opposite party for it to pass.


Operationalizing the Blueprint for an AI Bill of Rights: Recommendations for Practitioners, Researchers, and Policy Makers

Oesterling, Alex, Bhalla, Usha, Venkatasubramanian, Suresh, Lakkaraju, Himabindu

arXiv.org Artificial Intelligence

As Artificial Intelligence (AI) tools are increasingly employed in diverse real-world applications, there has been significant interest in regulating these tools. To this end, several regulatory frameworks have been introduced by different countries worldwide. For example, the European Union recently passed the AI Act, the White House issued an Executive Order on safe, secure, and trustworthy AI, and the White House Office of Science and Technology Policy issued the Blueprint for an AI Bill of Rights (AI BoR). Many of these frameworks emphasize the need for auditing and improving the trustworthiness of AI tools, underscoring the importance of safety, privacy, explainability, fairness, and human fallback options. Although these regulatory frameworks highlight the necessity of enforcement, practitioners often lack detailed guidance on implementing them. Furthermore, the extensive research on operationalizing each of these aspects is frequently buried in technical papers that are difficult for practitioners to parse. In this write-up, we address this shortcoming by providing an accessible overview of existing literature related to operationalizing regulatory principles. We provide easy-to-understand summaries of state-of-the-art literature and highlight various gaps that exist between regulatory guidelines and existing AI research, including the trade-offs that emerge during operationalization. We hope that this work not only serves as a starting point for practitioners interested in learning more about operationalizing the regulatory guidelines outlined in the Blueprint for an AI BoR but also provides researchers with a list of critical open problems and gaps between regulations and state-of-the-art AI research. Finally, we note that this is a working paper and we invite feedback in line with the purpose of this document as described in the introduction.


Who Followed the Blueprint? Analyzing the Responses of U.S. Federal Agencies to the Blueprint for an AI Bill of Rights

Lage, Darren, Pruitt, Riley, Arnold, Jason Ross

arXiv.org Artificial Intelligence

This study examines the extent to which U.S. federal agencies responded to and implemented the principles outlined in the White House's October 2022 "Blueprint for an AI Bill of Rights." The Blueprint provided a framework for the ethical governance of artificial intelligence systems, organized around five core principles: safety and effectiveness, protection against algorithmic discrimination, data privacy, notice and explanation about AI systems, and human alternatives and fallback. Through an analysis of publicly available records across 15 federal departments, the authors found limited evidence that the Blueprint directly influenced agency actions after its release. Only five departments explicitly mentioned the Blueprint, while 12 took steps aligned with one or more of its principles. However, much of this work appeared to have precedents predating the Blueprint or motivations disconnected from it, such as compliance with prior executive orders on trustworthy AI. Departments' activities often emphasized priorities like safety, accountability and transparency that overlapped with Blueprint principles, but did not necessarily stem from it. The authors conclude that the non-binding Blueprint seems to have had minimal impact on shaping the U.S. government's approach to ethical AI governance in its first year. Factors like public concerns after high-profile AI releases and obligations to follow direct executive orders likely carried more influence over federal agencies. More rigorous study would be needed to definitively assess the Blueprint's effects within the federal bureaucracy and broader society.



UK, US, EU and China sign declaration of AI's 'catastrophic' danger

The Guardian

The UK, US, EU and China have all agreed that artificial intelligence poses a potentially catastrophic risk to humanity, in the first international declaration to deal with the fast-emerging technology. Twenty-eight governments signed up to the so-called Bletchley declaration on the first day of the AI safety summit, hosted by the British government. The declaration does not agree to set up an international testing hub in the UK, as some in the British government had hoped. But it does provide a template for international collaboration in the future, with future safety summits now planned in South Korea in six months' time and in France in a year. The declaration says: "There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models."


Britain must become a leader in AI regulation, say MPs

The Guardian

The UK should introduce new legislation to control artificial intelligence or risk falling behind the EU and the US in setting the pace for regulating the technology, MPs have said. Rishi Sunak's government was urged to act as it prepares to host a global AI safety summit at Bletchley Park, home of the Enigma codebreakers, in November. The science, innovation and technology committee said on Thursday the regulatory approach outlined in a recent government white paper risked falling behind others. "The AI white paper should be welcomed as an initial effort to engage with this complex task, but its proposed approach is already risking falling behind the pace of development of AI," the committee said in an interim report on AI governance. "This threat is made more acute by the efforts of other jurisdictions, principally the European Union and the United States, to set international standards." The EU, a trendsetter in tech regulation, is pushing ahead with the AI Act, while in the US the White House has published a blueprint for an AI bill of rights and the US senate majority leader, Chuck Schumer, has published a framework for developing AI regulations.